post-hoc interpretability method
XAI4Extremes: An interpretable machine learning framework for understanding extreme-weather precursors under climate change
Jiawen Wei, Aniruddha Bora, Vivek Oommen, Chenyu Dong, Juntao Yang, Jeff Adie, Chen Chen, Simon See, George Karniadakis, Gianmarco Mengaldo
Extreme weather events are increasing in frequency and intensity due to climate change, and this, in turn, is exacting a significant toll on communities worldwide. While prediction skill is improving with advances in numerical weather prediction and artificial intelligence tools, extreme weather still presents challenges. More specifically, it remains unclear how to identify the precursors of such extreme weather events and how these precursors may evolve under climate change. In this paper, we propose to use post-hoc interpretability methods to construct relevance maps that show the key extreme-weather precursors identified by deep learning models. We then compare this machine view with existing domain knowledge to understand whether deep learning models have identified patterns in the data that may enrich our understanding of extreme-weather precursors. We finally bin these relevance maps into different multi-year time periods to understand the role that climate change plays in shaping these precursors. The experiments are carried out on Indochina heatwaves, but the methodology can be readily extended to other extreme weather events worldwide.
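The abstract describes this workflow only at a high level. As a minimal, hedged sketch of what such a pipeline could look like, the Python example below computes simple gradient-saliency relevance maps for a toy CNN standing in for a trained heatwave classifier and averages them within multi-year (here, decadal) bins. The TinyCNN model, grid size, variables, and binning scheme are illustrative assumptions, not the authors' actual method, which may rely on different interpretability techniques and data.

```python
# Illustrative sketch only: gradient-based relevance maps for a gridded
# heatwave classifier, averaged within multi-year bins. Model, grid size,
# variables, and binning are assumptions, not the authors' pipeline.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Stand-in for a trained extreme-weather classifier (variables x lat x lon in, event score out)."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
        )

    def forward(self, x):
        return self.net(x)

def relevance_map(model, x):
    """Simple saliency: absolute gradient of the event score w.r.t. the input fields."""
    x = x.clone().requires_grad_(True)
    model(x).sum().backward()
    return x.grad.abs().mean(dim=1).squeeze(0)  # aggregate over variables -> (lat, lon) map

model = TinyCNN()
# Fake data: one sample per year on a 32x64 grid with 3 physical variables.
samples = [(year, torch.randn(1, 3, 32, 64)) for year in range(1990, 2020)]

# Bin per-sample relevance maps into decades to inspect how the precursor
# patterns highlighted by the model shift over time.
bins = {}
for year, x in samples:
    bins.setdefault((year // 10) * 10, []).append(relevance_map(model, x))

for decade, maps in sorted(bins.items()):
    mean_map = torch.stack(maps).mean(dim=0)
    print(decade, "mean relevance map shape:", tuple(mean_map.shape))
```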
In Defence of Post-hoc Explainability
The widespread adoption of machine learning in scientific research has created a fundamental tension between model opacity and scientific understanding. Whilst some advocate for intrinsically interpretable models, we introduce Computational Interpretabilism (CI) as a philosophical framework for post-hoc interpretability in scientific AI. Drawing parallels with human expertise, where post-hoc rationalisation coexists with reliable performance, CI establishes that scientific knowledge emerges through structured model interpretation when properly bounded by empirical validation. Through mediated understanding and bounded factivity, we demonstrate how post-hoc methods achieve epistemically justified insights without requiring complete mechanical transparency, resolving tensions between model complexity and scientific comprehension.
Revisiting the robustness of post-hoc interpretability methods
Jiawen Wei, Hugues Turbé, Gianmarco Mengaldo
Post-hoc interpretability methods play a critical role in explainable artificial intelligence (XAI), as they pinpoint the portions of data that a trained deep learning model deemed important to make a decision. However, different post-hoc interpretability methods often provide different results, casting doubt on their accuracy. For this reason, several evaluation strategies have been proposed to assess the accuracy of post-hoc interpretability methods. Many of these evaluation strategies provide a coarse-grained assessment -- i.e., they evaluate how the performance of the model degrades on average when different data points are corrupted across multiple samples. While these strategies are effective in selecting the post-hoc interpretability method that is most reliable on average, they fail to provide a sample-level, also referred to as fine-grained, assessment. In other words, they do not measure the robustness of post-hoc interpretability methods. We propose an approach and two new metrics to provide a fine-grained assessment of post-hoc interpretability methods. We show that a method's robustness is generally linked to its coarse-grained performance.
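To make the coarse- versus fine-grained distinction concrete, the sketch below implements a generic perturbation-based check: for each sample, the features ranked most relevant by a toy attribution are masked and the resulting drop in the model's score is recorded, first averaged over samples (coarse-grained) and then inspected per sample (a fine-grained view). The toy model, attribution, and masking scheme are assumptions for illustration; the two metrics proposed in the paper are not reproduced here.

```python
# Illustrative perturbation-based evaluation of a post-hoc attribution.
# Masks the top-k features the explanation ranks highest and measures the
# drop in the model's score; NOT the metrics proposed in the paper.
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    """Toy scorer: a fixed linear model standing in for a trained network."""
    w = np.linspace(-1.0, 1.0, x.shape[-1])
    return x @ w

def attribution(x):
    """Toy post-hoc explanation: input * weight, i.e. each feature's contribution."""
    w = np.linspace(-1.0, 1.0, x.shape[-1])
    return x * w

def score_drop(x, k=5):
    """Drop in model score after masking the k features deemed most relevant."""
    rel = attribution(x)
    top = np.argsort(-np.abs(rel))[:k]
    x_masked = x.copy()
    x_masked[top] = 0.0  # corrupt the 'important' features
    return model(x) - model(x_masked)

X = rng.normal(size=(100, 20))
drops = np.array([score_drop(x) for x in X])

# Coarse-grained: average degradation across samples.
print("mean score drop:", drops.mean())
# Fine-grained: the per-sample spread reveals samples where the explanation is unreliable.
print("fraction of samples with near-zero drop:", (np.abs(drops) < 0.1).mean())
```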
Multicriteria interpretability driven Deep Learning
Recent software and hardware advances have democratized DL methods, allowing scholars and practitioners to apply them in their own fields. On the software side, frameworks such as TensorFlow (Abadi et al., 2015) and PyTorch (Paszke et al., 2019) make it possible to create complex DL models without writing ad-hoc compilers, as LeCun et al. (1990) had to do. On the hardware side, the decreasing cost of the hardware needed to train such models has allowed many people to build and deploy sophisticated neural networks at minimal cost (Zhang et al., 2018). The democratization of these powerful technologies has benefited many fields beyond computer science; among those that have benefited the most are Economics (Nosratabadi et al., 2020) and Finance (Ozbayoglu et al., 2020). DL applications have also piqued the interest of governments, who are concerned about their possible social implications. It is well known that these models require extra vigilance with respect to training data in order to minimize biases of any kind, especially in high-stakes decisions (Rudin, 2019). To counter these side effects, governments have enacted several regulatory standards, and jurisprudence has begun to elaborate on the concept of a right to explanation (Dexe et al., 2020). In this effort to build interpretable yet DL-grounded models, scholars have started developing post-hoc interpretation methods.